BlueSense: designing an extensible platform for wearable motion sensing, sensor research and IoT applications
We present an extensible sensor research platform for wearable and IoT applications. The result is a 30x30mm platform capable of 500Hz motion and orientation sensing using 98mW when logging data. The platform can wake up at programmed intervals using only 70uW in hardware-off mode. A maximum 0.6ppm time deviation between nodes allows usage in a network for whole-body movement sensing.
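As a back-of-the-envelope illustration of what these power figures imply, the sketch below estimates battery life for an always-on node versus a duty-cycled one, using the 98mW and 70uW figures quoted above. The battery capacity is a hypothetical assumption, not a figure from the abstract.

```python
# Battery-life estimate for a duty-cycled sensor node, using the power
# figures from the abstract: 98 mW while logging, 70 uW in hardware-off mode.
# The battery capacity below is a hypothetical assumption.

def battery_life_hours(capacity_mwh, p_active_mw, p_sleep_mw, duty_cycle):
    """Hours of operation for a node active `duty_cycle` of the time."""
    p_avg = duty_cycle * p_active_mw + (1 - duty_cycle) * p_sleep_mw
    return capacity_mwh / p_avg

# assumed 150 mAh Li-Po at 3.7 V ~= 555 mWh (not from the abstract)
capacity = 150 * 3.7

always_on = battery_life_hours(capacity, 98.0, 0.070, 1.0)
one_percent = battery_life_hours(capacity, 98.0, 0.070, 0.01)

print(f"always-on logging: {always_on:.1f} h")   # a few hours
print(f"1% duty cycle:     {one_percent:.1f} h") # weeks of standby
```

The hardware-off mode dominates the average draw once the duty cycle drops, which is why the 70uW figure matters for long-term deployments.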
Movement recognition from wearable sensors data: power-aware evolutionary training for template matching and data annotation recovery methods
Human activity recognition finds numerous applications, for example in sport training, patient rehabilitation, gait analysis and surgical skills evaluation. Wearable sensing and Template Matching Methods (TMMs) offer significant advantages compared to manual assessment methods as well as to more cumbersome camera-based setups and other machine learning (ML) algorithms.
TMMs require less data for training than other ML methods, and they are low-power and therefore suitable for integration on wearable sensors. They compute a sample-by-sample distance between two time series. When applied to gesture sensor data, this even enables a richer and more movement-specific assessment and feedback. However, TMMs lack a standard training procedure.
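The sample-by-sample distance idea can be sketched with a minimal WarpingLCSS-style score: each incoming sample updates the best match of the template against the stream, rewarding matching samples and penalising mismatches. The reward and penalty values here are illustrative assumptions, not the thesis's actual parameters.

```python
# Minimal sketch of sample-by-sample template matching (WarpingLCSS-style):
# the score is updated once per incoming sample, which is what makes TMMs
# cheap enough for wearable sensors. Reward/penalty values are illustrative.

def wlcss_score(template, stream, reward=1, penalty=1, eps=0):
    """Best score of `template` matched anywhere in `stream`."""
    # m[j] = best score of a partial match ending at template sample j
    m = [0] * (len(template) + 1)
    best = float("-inf")
    for x in stream:                        # one update per incoming sample
        prev = m[:]                         # scores from the previous step
        for j, t in enumerate(template, start=1):
            if abs(x - t) <= eps:           # sample matches template sample
                m[j] = prev[j - 1] + reward
            else:                           # mismatch or skip: pay a penalty
                m[j] = max(prev[j] - penalty,
                           prev[j - 1] - penalty,
                           m[j - 1] - penalty)
        best = max(best, m[len(template)])
    return best

template = [1, 2, 3, 2]
print(wlcss_score(template, [0, 1, 2, 3, 2, 0]))  # close match: high score
print(wlcss_score(template, [5, 5, 5, 5, 5, 5]))  # no match: low score
```

A gesture is then detected when the score crosses a threshold, which is one of the parameters a training procedure must choose.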
In this thesis, we introduce an innovative evolutionary training algorithm for TMMs that not only maximises recognition performance, but can also favour power minimisation by reducing the TMM's computational cost with a configurable trade-off. We show that such a reduction is possible without sacrificing recognition performance by exploiting the long-established concept of "time warping". We demonstrate that our method is suitable for a wide variety of raw data as well as processed, fused and encoded sensor data.
We present a new, original multi-modal, multi-user dataset of beach volleyball movements that allowed us to evaluate our training methods on a real case of sport training actions. Moreover, collecting this dataset helped us generate a set of guidelines for the collection of movement data in the wild using wearable sensors.
We introduce a 3D human model that can be animated through inertial wearable sensor data for troubleshooting, movement analysis and privacy-safe annotation of human activities. Finally, through a case study on a dataset of drinking actions, we demonstrate how TMMs can improve the quality of a badly annotated but highly valuable dataset.
High reliability Android application for multidevice multimodal mobile data acquisition and annotation
We have completed the collection of one of the richest accurately annotated mobile datasets of modes of transportation and locomotion. To do this, we developed a highly reliable Android application called DataLogger, capable of recording multisensor data from multiple synchronized smartphones simultaneously. The application allows real-time data annotation. We explain how we designed the app to achieve high reliability and ease of use. We also present an evaluation of the application in a big-data collection (750 hours, 950 GB of data, 17 different sensor modalities), analysing the data loss (less than 0.4‰) and battery consumption (≈6% on average per hour). The application is available as open source.
A versatile annotated dataset for multimodal locomotion analytics with mobile devices
We explain how to obtain a highly versatile and precisely annotated dataset for the multimodal locomotion of mobile users. After presenting the experimental setup, data management challenges and potential applications, we conclude with best practices for assuring data quality and reducing loss. The dataset currently comprises 7 months of measurements, collected by smartphone sensors and a body-worn camera, while the 3 participants used 8 different modes of transportation. It comprises 950 GB of sensor data, which corresponds to 750 hours of labelled data. The obtained data will be useful for a wide range of research questions related to activity recognition, and will be made available to the community.
The University of Sussex-Huawei locomotion and transportation dataset for multimodal analytics with mobile devices
Scientific advances build on reproducible research, which needs publicly available benchmark datasets. The computer vision and speech recognition communities have led the way in establishing benchmark datasets. Far fewer datasets are available in mobile computing, especially for rich locomotion and transportation analytics.
This paper presents a highly versatile and precisely annotated large-scale dataset of smartphone sensor data for multimodal locomotion and transportation analytics of mobile users. The dataset comprises 7 months of measurements, collected from all sensors of 4 smartphones carried at typical body locations, including the images of a body-worn camera, while 3 participants used 8 different modes of transportation in the southeast of the United Kingdom, including in London. In total 28 context labels were annotated, including transportation mode, participant's posture, inside/outside location, road conditions, traffic conditions, presence in tunnels, social interactions, and having meals. The total amount of collected data exceeds 950 GB of sensor data, which corresponds to 2812 hours of labelled data and 17562 km of traveled distance. We present how we set up the data collection, including the equipment used and the experimental protocol.
We discuss the dataset, including the data curation process and the analysis of the annotations and of the sensor data. We discuss the challenges encountered and present the lessons learned and some of the best practices we developed to ensure high-quality data collection and annotation. We discuss the potential applications which can be developed using this large-scale dataset. In particular, we present how a machine-learning system can use this dataset to automatically recognize modes of transportation. Many other research questions related to transportation analytics, activity recognition, radio signal propagation and mobility modelling can be addressed through this dataset. The full dataset is being made available to the community, and a thorough preview is already published.
Demo: Complex human gestures encoding from wearable inertial sensors for activity recognition
We demonstrate a method to encode complex human gestures acquired from inertial sensors for activity recognition. Gestures are encoded as a stream of symbols which represent the change in orientation and displacement of the body limbs over time.
The first novelty of this encoding is that it enables the reuse of previously developed single-channel template matching algorithms even when multiple sensors are used simultaneously.
The second novelty is that it encodes changes in the orientation of limbs, which is important in some activities, such as sport analytics.
We demonstrate the method using our custom inertial platform, BlueSense. Using a set of five BlueSense nodes, we implemented a motion tracking system that displays a 3D human model and shows the corresponding movement encoding in real time.
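As an illustration of the symbol-stream idea, the hypothetical encoder below quantises successive changes of one orientation angle into a three-symbol alphabet. The alphabet and threshold are assumptions for the sketch; the actual encoding covers both orientation and displacement of several limbs.

```python
# Illustrative sketch of encoding motion as a stream of symbols: each
# orientation change is quantised to U(p), D(own) or S(teady). Alphabet
# and threshold are hypothetical, chosen only for this example.

def encode_deltas(angles_deg, threshold=5.0):
    """Map successive orientation changes to a symbol string."""
    symbols = []
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        delta = cur - prev
        if delta > threshold:
            symbols.append("U")
        elif delta < -threshold:
            symbols.append("D")
        else:
            symbols.append("S")
    return "".join(symbols)

# A raise-then-lower arm movement becomes a compact string that any
# single-channel template matcher can consume directly.
print(encode_deltas([0, 20, 45, 47, 30, 5]))  # -> "UUSDD"
```

Concatenating the per-limb symbol streams is what lets a single-channel matcher handle multiple sensors at once.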
WLCSSLearn: learning algorithm for template matching-based gesture recognition systems
Template matching algorithms are well suited for gesture recognition, but unlike other machine learning approaches there are no established methods to optimize their parameters. We present WLCSSLearn: an optimization approach for the WarpingLCSS algorithm based on genetic algorithms. We demonstrate that WLCSSLearn makes the optimization procedure automatic, fast and suitable for new recognition problems even when there is no a priori knowledge about the suitable range of parameter values. We evaluate WLCSSLearn on three different datasets of gestures. We demonstrate that our method increased the accuracy and F1 score by up to 20% compared to previous literature.
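A toy sketch of the genetic-algorithm idea: evolve a population of candidate parameter pairs (here a reward/penalty pair), keeping the fittest half each generation and producing children by crossover and mutation. The fitness function below is a stand-in with a known optimum; the actual system evaluates recognition performance such as the F1 score.

```python
# Toy genetic algorithm over a 2-parameter search space, sketching the
# WLCSSLearn idea. The quadratic fitness is a stand-in with its optimum
# at (4, 2); a real run would score recognition performance instead.

import random

random.seed(0)  # deterministic for the example

def evolve(fitness, generations=30, pop_size=20, mut=0.5):
    # each individual is a (reward, penalty) pair, initialised at random
    pop = [(random.uniform(0, 10), random.uniform(0, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank by fitness
        elite = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)    # crossover: average 2 parents
            children.append(((a[0] + b[0]) / 2 + random.gauss(0, mut),
                             (a[1] + b[1]) / 2 + random.gauss(0, mut)))
        pop = elite + children
    return max(pop, key=fitness)

# stand-in fitness with a known optimum at reward=4, penalty=2
target = lambda p: -((p[0] - 4) ** 2 + (p[1] - 2) ** 2)
best = evolve(target)
print(best)  # should land near (4, 2)
```

No gradient of the fitness is needed, which is why this style of search suits template-matching parameters that have no differentiable loss.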
Exploring human activity annotation using a privacy preserving 3D model
Annotating activity recognition datasets is a very time-consuming process. Using lay annotators (e.g. through crowdsourcing) has been suggested to speed this up. However, this requires preserving the privacy of users and may preclude relying on video for annotation. We investigate to what extent a 3D human model, animated from the data of inertial sensors placed on the limbs, allows for the annotation of human activities. The animated model was shown to 6 people in a series of tests in order to assess the accuracy of the labelling. We present the model and the dataset, then describe the experiments, including the number of activities. We present 3 experiments where we investigate the use of a 3D model for i) activity segmentation, ii) "open-ended" annotation, where users freely describe the activity they see on screen, and iii) traditional annotation, where users pick one activity from a pre-defined list. In the latter case, results show that users recognise activities with 56% accuracy when picking from 11 possible activities.
Three-year review of the 2018–2020 SHL challenge on transportation and locomotion mode recognition from mobile sensors
The Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenges aim to advance and capture the state of the art in locomotion and transportation mode recognition from smartphone motion (inertial) sensors. The goal of this series of machine learning and data science challenges was to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway). The three challenges focused on time-independent (SHL 2018), position-independent (SHL 2019) and user-independent (SHL 2020) evaluations, respectively. Overall, we received 48 submissions (out of 93 teams who registered interest) involving 201 scientists over the three years. The survey captures the state of the art through a meta-analysis of the contributions to the three challenges, including approaches, recognition performance, computational requirements, and software tools and frameworks used. It was shown that state-of-the-art methods can distinguish most modes of transportation with relative ease, although differentiating between subtly distinct activities, such as rail transport (Train and Subway) and road transport (Bus and Car), remains challenging. We summarize insightful methods from participants that could be employed to address practical challenges of transportation mode recognition, for instance to tackle over-fitting, employ robust representations, exploit data augmentation, and apply smart post-processing techniques to improve performance. Finally, we present baseline results to compare the three challenges with a unified recognition pipeline and decision window length.
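A minimal sketch of such a windowed pipeline: the sensor stream is cut into fixed-length decision windows, each window is classified, and the label sequence is smoothed by majority-vote post-processing. The threshold classifier is a placeholder assumption, not any challenge entry.

```python
# Windowed recognition pipeline sketch: fixed-length decision windows,
# a placeholder per-window classifier, and majority-vote smoothing of
# the kind used as post-processing by challenge participants.

def windows(stream, size):
    """Non-overlapping decision windows of `size` samples each."""
    return [stream[i:i + size] for i in range(0, len(stream) - size + 1, size)]

def classify(window):
    # placeholder rule: high mean motion magnitude -> "Walk", else "Still"
    return "Walk" if sum(window) / len(window) > 0.5 else "Still"

def smooth(labels):
    # post-processing: majority vote over each 3-label neighbourhood
    out = list(labels)
    for i in range(1, len(labels) - 1):
        hood = labels[i - 1:i + 2]
        out[i] = max(set(hood), key=hood.count)
    return out

stream = [0.9, 1.0, 0.8, 0.1, 0.2, 0.3, 1.0, 0.9, 1.1]
raw = [classify(w) for w in windows(stream, 3)]
print(raw)          # the lone "Still" is a spurious flip
print(smooth(raw))  # the majority vote removes it
```

The decision window length trades latency against accuracy, which is why the baseline comparison above fixes it across all three challenges.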